Matrix Inverse



We thank all three reviewers for their thorough reviews and constructive feedback

Neural Information Processing Systems

We thank all three reviewers for their thorough reviews and constructive feedback. Concern 1: Why not use full second order? Including additional second-order information can otherwise make the results worse. "...CGD still requires that the step-size is bounded by one over the max diagonal entry of the Hessian...": see also our answer to Reviewer #7. Concern 3: Is CGD scalable?


How is Linear Algebra Applied for Machine Learning?

#artificialintelligence

Firstly, let's address the building blocks of linear algebra -- scalar, vector, matrix, and tensor. To implement them in Python, we can use NumPy's np.array(). Let's look at the shapes of the vector, matrix, and tensor we generated above. Similar to how we perform operations on numbers, the same logic also works for matrices and vectors. However, please note that element-wise operations require the two matrices to have the same shape (or shapes compatible under NumPy's broadcasting rules).
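A minimal sketch of the building blocks and operations described above, using NumPy (the variable names are illustrative, not from the original article):

```python
import numpy as np

# Building blocks of linear algebra as NumPy arrays.
scalar = np.array(5)                     # 0-D: a single number
vector = np.array([1, 2, 3])             # 1-D
matrix = np.array([[1, 2], [3, 4]])      # 2-D
tensor = np.array([[[1, 2], [3, 4]],
                   [[5, 6], [7, 8]]])    # 3-D

# Shapes of the objects we generated above.
print(scalar.shape)   # ()
print(vector.shape)   # (3,)
print(matrix.shape)   # (2, 2)
print(tensor.shape)   # (2, 2, 2)

# Element-wise operations require matching (or broadcastable) shapes.
other = np.array([[10, 20], [30, 40]])
print(matrix + other)   # element-wise sum
print(matrix * other)   # element-wise (Hadamard) product
print(matrix @ other)   # matrix multiplication: inner dimensions must match
```

Note that `*` is element-wise in NumPy; true matrix multiplication uses the `@` operator or `np.matmul`.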


Interpolating the Trace of the Inverse of Matrix $\mathbf{A} + t \mathbf{B}$

Ameli, Siavash, Shadden, Shawn C.

arXiv.org Machine Learning

We develop heuristic interpolation methods for the function $t \mapsto \operatorname{trace}\left( (\mathbf{A} + t \mathbf{B})^{-1} \right)$, where the matrices $\mathbf{A}$ and $\mathbf{B}$ are symmetric and positive definite and $t$ is a real variable. This function is featured in many applications in statistics, machine learning, and computational physics. The presented interpolation functions are based on the modification of a sharp upper bound that we derive for this function, which is a new trace inequality for matrices. We demonstrate the accuracy and performance of the proposed method with numerical examples, namely, the marginal maximum likelihood estimation for linear Gaussian process regression and the estimation of the regularization parameter of ridge regression with the generalized cross-validation method.
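To make the object of study concrete, here is a small sketch that evaluates the function $t \mapsto \operatorname{trace}\left( (\mathbf{A} + t \mathbf{B})^{-1} \right)$ exactly via a Cholesky factorization; this illustrates the function being interpolated, not the paper's interpolation method, and the test matrices are made-up examples:

```python
import numpy as np

def trace_inv(A, B, t):
    # Evaluates t -> trace((A + t B)^{-1}) exactly.
    # Assumes A and B are symmetric positive definite and t >= 0,
    # so A + t B is SPD and admits a Cholesky factorization.
    M = A + t * B
    L = np.linalg.cholesky(M)                      # M = L L^T
    Linv = np.linalg.solve(L, np.eye(L.shape[0]))  # L^{-1} (triangular solve)
    # trace(M^{-1}) = trace(L^{-T} L^{-1}) = ||L^{-1}||_F^2
    return np.sum(Linv ** 2)

# Random SPD test matrices (illustrative only).
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 5))
A = X @ X.T + 5 * np.eye(5)
Y = rng.standard_normal((5, 5))
B = Y @ Y.T + 5 * np.eye(5)
print(trace_inv(A, B, 2.0))
```

Direct evaluation like this costs a full factorization per value of $t$, which is exactly what the paper's interpolation functions are designed to avoid when the function must be evaluated for many $t$.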


Does Deep Learning always have to Reinvent the Wheel?

#artificialintelligence

Machine learning, and in particular deep learning, is revolutionizing the world as we know it today. We have seen tremendous advances in speech and image recognition, followed by the application of deep learning to many other domains. In many of those domains, deep learning now matches or even exceeds the previous state of the art. A clear trend is that networks are growing ever more complex and more computationally demanding. Today, we build ever-larger networks on top of previous generations of network topologies.


Reducing the Computational Complexity of Pseudoinverse for the Incremental Broad Learning System on Added Inputs

Zhu, Hufei, Wei, Chenghao

arXiv.org Machine Learning

In this brief, we improve the Broad Learning System (BLS) [7] by reducing the computational complexity of incremental learning for added inputs. We utilize the inverse of a sum of matrices in [8] to improve a step in the pseudoinverse of a row-partitioned matrix. Accordingly, we propose two fast algorithms for the cases q > k and q < k, respectively, where q and k denote the number of additional training samples and the total number of nodes, respectively. Specifically, when q > k, the proposed algorithm computes only a k × k matrix inverse, instead of the q × q matrix inverse in the existing algorithm, and can therefore reduce the complexity dramatically. Our simulations, which follow those for Table V in [7], show that the proposed and existing algorithms achieve the same testing accuracy, while the speedups in BLS training time of the proposed algorithm over the existing one range from 1.24 to 1.30.
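The core trade-off described above can be sketched for the pseudoinverse of a row-partitioned matrix. The snippet below is a simplified illustration of the inverse-of-a-sum idea for a full-column-rank matrix, not the paper's exact algorithm, and all names are illustrative:

```python
import numpy as np

def pinv_add_rows(A, Aq):
    # Pseudoinverse of the row-stacked matrix M = [A; Aq], for A (n x k)
    # with full column rank, choosing between a q x q and a k x k inverse.
    k = A.shape[1]
    q = Aq.shape[0]
    G = A.T @ A                      # k x k Gram matrix (cached in practice)
    if q < k:
        # Inverse of a sum via the Woodbury identity:
        # (G + Aq^T Aq)^{-1} = K - K Aq^T (I + Aq K Aq^T)^{-1} Aq K,
        # so only a q x q system must be solved.
        K = np.linalg.inv(G)
        S = np.eye(q) + Aq @ K @ Aq.T
        K_new = K - K @ Aq.T @ np.linalg.solve(S, Aq @ K)
    else:
        # q >= k: a direct k x k inverse is cheaper than a q x q one.
        K_new = np.linalg.inv(G + Aq.T @ Aq)
    M = np.vstack([A, Aq])
    return K_new @ M.T               # (M^T M)^{-1} M^T = pinv(M)

# Illustrative shapes: 12 samples, 4 nodes, then q = 2 or q = 6 added rows.
rng = np.random.default_rng(1)
A = rng.standard_normal((12, 4))
Aq_small = rng.standard_normal((2, 4))   # q < k branch
Aq_large = rng.standard_normal((6, 4))   # q > k branch
```

Both branches produce the same pseudoinverse as recomputing `np.linalg.pinv` from scratch; the point, as in the abstract, is that the size of the matrix actually inverted (q × q versus k × k) can be chosen to be the smaller of the two.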

